Link prediction is a crucial problem in graph-structured data. Due to the recent success of graph neural networks (GNNs), a variety of GNN-based models have been proposed to tackle the link prediction task. Specifically, GNNs leverage the message passing paradigm to obtain node representations, which relies on link connectivity. However, in a link prediction task, links in the training set are always present, while those in the testing set have not yet formed, resulting in a discrepancy in connectivity patterns and a bias in the learned representations. This leads to a dataset shift problem that degrades model performance. In this paper, we first identify the dataset shift problem in the link prediction task and provide theoretical analyses of how existing link prediction methods are vulnerable to it. We then propose FakeEdge, a model-agnostic technique, to address the problem by mitigating the graph topological gap between the training and testing sets. Extensive experiments demonstrate the applicability and superiority of FakeEdge on multiple datasets across various domains.
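The core idea can be illustrated with a short sketch (not the authors' implementation): before encoding the subgraph around a candidate link (u, v), force that focal link into a fixed state, either always present or always absent, so that training subgraphs (where the link is observed) and testing subgraphs (where it is not) share the same connectivity pattern. The function name `fake_edge` and the `mode` argument are illustrative assumptions.

```python
import torch

def fake_edge(edge_index: torch.Tensor, u: int, v: int, mode: str = "plus") -> torch.Tensor:
    """edge_index: [2, E] directed edge list storing both directions of each undirected edge."""
    src, dst = edge_index
    # drop every stored copy of the focal link (u, v) / (v, u)
    keep = ~(((src == u) & (dst == v)) | ((src == v) & (dst == u)))
    ei = edge_index[:, keep]
    if mode == "plus":
        # re-insert the focal link so it is always present, in both directions
        extra = torch.tensor([[u, v], [v, u]], dtype=ei.dtype)
        ei = torch.cat([ei, extra], dim=1)
    return ei  # mode == "minus": the focal link is always absent

# The modified edge_index is then fed to the subgraph encoder identically for
# training links (observed) and testing links (not yet formed).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 0],
                           [1, 0, 2, 1, 0, 2]])
print(fake_edge(edge_index, 0, 1, mode="minus"))
```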
Generative self-supervised learning (SSL), and masked autoencoders in particular, has become one of the most exciting learning paradigms and has shown great potential for handling graph data. However, real-world graphs are always heterogeneous, which raises three key challenges that existing methods overlook: 1) how to capture complex graph structure? 2) how to incorporate diverse node attributes? and 3) how to encode different node positions? In light of this, we study the problem of generative SSL on heterogeneous graphs and propose HGMAE, a novel heterogeneous graph masked autoencoder model, to address these challenges. HGMAE captures comprehensive graph information through two innovative masking techniques and three unique training strategies. In particular, we first develop metapath masking and adaptive attribute masking with a dynamic mask rate to enable effective and stable learning on heterogeneous graphs. We then design several training strategies, including metapath-based edge reconstruction to exploit complex structural information, target attribute restoration to incorporate diverse node attributes, and positional feature prediction to encode node position information. Extensive experiments demonstrate that HGMAE outperforms contrastive and generative state-of-the-art baselines on several tasks across multiple datasets.
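A simplified sketch of the two masking steps mentioned above, metapath-based edge masking and attribute masking with a dynamic mask rate, is given below; the function names and the linear annealing schedule are illustrative assumptions, not the HGMAE API.

```python
import torch

def mask_metapath_edges(edge_index: torch.Tensor, mask_rate: float):
    """Randomly hide a fraction of metapath-induced edges; the hidden part is the reconstruction target."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)
    num_mask = int(mask_rate * num_edges)
    return edge_index[:, perm[num_mask:]], edge_index[:, perm[:num_mask]]  # (visible, masked)

def mask_attributes(x: torch.Tensor, mask_rate: float, mask_token: torch.Tensor):
    """Replace a fraction of target-node feature rows with a learnable mask token."""
    mask_idx = torch.randperm(x.size(0))[: int(mask_rate * x.size(0))]
    x_masked = x.clone()
    x_masked[mask_idx] = mask_token
    return x_masked, mask_idx  # the model restores x[mask_idx] from x_masked

def dynamic_mask_rate(epoch: int, total_epochs: int, start: float = 0.5, end: float = 0.8) -> float:
    """One way to make the mask rate 'dynamic': anneal it linearly over training."""
    return start + (end - start) * epoch / max(total_epochs - 1, 1)
```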
Domain-adaptive text classification is a challenging problem for large-scale pretrained language models, because they typically require expensive additional labeled data to adapt to new domains. Existing works usually fail to exploit the implicit relationships among words across domains. In this paper, we propose a novel method, called Domain Adaptation with Structured Knowledge (DASK), to enhance domain adaptation by leveraging word-level semantic relationships. DASK first constructs a knowledge graph to capture the relationships between pivot terms (domain-independent words) and non-pivot terms in the target domain. Then, during training, DASK injects pivot-related knowledge graph information into source-domain texts. For the downstream task, these knowledge-injected texts are fed into a BERT variant capable of processing knowledge-injected textual data. Thanks to the knowledge injection, our model learns domain-invariant features for non-pivots based on their relationships with pivots. DASK ensures that the pivots have domain-invariant behavior by dynamically inferring them via the polarity scores of candidate pivots during training with pseudo-labels. We validate DASK on a variety of cross-domain sentiment classification tasks and observe up to 2.9% absolute performance improvement over baselines for 20 different domain pairs. Code will be available at https://github.com/hikaru-nara/dask.
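As a rough sketch of the pivot-selection step described above (not the DASK code), one can score candidate words by how consistently they co-occur with a single sentiment under (pseudo-)labels and keep words that behave consistently in both domains; the thresholds and function names below are assumptions.

```python
from collections import Counter

def polarity_scores(texts, labels, min_count=10):
    """labels: 1 = positive, 0 = negative (pseudo-labels in the target domain)."""
    pos, neg = Counter(), Counter()
    for text, y in zip(texts, labels):
        (pos if y == 1 else neg).update(set(text.lower().split()))
    scores = {}
    for w in set(pos) | set(neg):
        p, n = pos[w], neg[w]
        if p + n >= min_count:
            scores[w] = abs(p - n) / (p + n)  # 1.0 = fully one-sided, 0.0 = neutral
    return scores

def select_pivots(source_scores, target_scores, threshold=0.6):
    # candidate pivots: frequent words whose polarity is strong and consistent across domains
    return {w for w in source_scores
            if w in target_scores
            and source_scores[w] >= threshold
            and target_scores[w] >= threshold}
```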
Data augmentation is popular in the training of large neural networks; however, there is currently no clear theoretical comparison between the different algorithmic choices for how to use the augmented data. In this paper, we take a step in this direction: we first present a simple and novel analysis for linear regression with label-invariant augmentations, which shows that data augmentation consistency (DAC) is intrinsically more efficient than empirical risk minimization on augmented data (DA-ERM). The analysis is then extended to misspecified augmentations (i.e., augmentations that change the labels), which again demonstrates the merit of DAC over DA-ERM. Furthermore, we extend our analysis to non-linear models (e.g., neural networks) and present generalization bounds. Finally, we conduct experiments that make a clean, apples-to-apples comparison between DAC and DA-ERM using CIFAR-100 and WideResNet; these together demonstrate the efficacy of DAC.
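The contrast between the two objectives can be written down in a few lines. The sketch below is one common instantiation, not the paper's exact formulation: DA-ERM averages the supervised loss over original and augmented samples alike, whereas DAC keeps the supervised loss on the original samples and adds a consistency penalty (here an MSE between logits, weighted by an assumed coefficient `lam`) tying each augmented sample's output to that of its source sample.

```python
import torch.nn.functional as F

def da_erm_loss(model, x, y, x_aug):
    # empirical risk minimization over original and augmented data alike
    return 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_aug), y))

def dac_loss(model, x, y, x_aug, lam=1.0):
    logits, logits_aug = model(x), model(x_aug)
    erm = F.cross_entropy(logits, y)                # supervised loss on the original samples only
    consistency = F.mse_loss(logits_aug, logits)    # outputs should be invariant to the augmentation
    return erm + lam * consistency
```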
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
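The sketch below is not the RELIANT algorithm itself; it only illustrates the kind of objective the problem statement implies: a standard distillation loss that makes the student match the teacher's softened predictions, combined with a generic fairness regularizer (here a demographic-parity gap between two sensitive groups, assuming binary classification). The weights `alpha`, `beta`, and temperature `T` are placeholders.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive, alpha=0.5, beta=1.0, T=2.0):
    task = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    # probability of the positive class, assuming a binary task
    prob = F.softmax(student_logits, dim=-1)[:, 1]
    # demographic-parity gap between sensitive groups 0 and 1
    gap = (prob[sensitive == 1].mean() - prob[sensitive == 0].mean()).abs()
    return alpha * task + (1 - alpha) * kd + beta * gap
```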
Despite significant progress in object categorization in recent years, a number of important challenges remain; chiefly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires a separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address the problems of supervised, zero-shot, generalized zero-shot, and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes in the embedding space than to others. We show that the resulting model yields improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K class vocabulary, on the Animals with Attributes and ImageNet datasets.
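A bare-bones sketch of such a distance constraint, with the per-constraint weights omitted and all names illustrative: a hinge loss that pushes each embedded sample closer to its correct vocabulary prototype than to any other prototype by a margin.

```python
import torch

def vocab_margin_loss(z, y, prototypes, margin=1.0):
    """z: [B, d] embedded samples, y: [B] class ids, prototypes: [C, d] vocabulary atoms."""
    dists = torch.cdist(z, prototypes)            # [B, C] distances to every prototype
    pos = dists.gather(1, y.unsqueeze(1))         # distance to the correct prototype
    hinge = (margin + pos - dists).clamp(min=0)   # violated when a wrong prototype is too close
    hinge.scatter_(1, y.unsqueeze(1), 0.0)        # do not penalize the correct class itself
    return hinge.mean()
```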
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
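A heavily simplified sketch of the kind of mapping described above, not the paper's architecture: a network that consumes WiFi amplitude and phase tensors and predicts, per spatial location, a body-part class over the 24 regions plus (u, v) coordinates. Channel counts and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class WiFiDensePoseSketch(nn.Module):
    def __init__(self, in_channels=6, num_parts=24):
        super().__init__()
        self.encoder = nn.Sequential(              # amplitude + phase channels in
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.part_head = nn.Conv2d(128, num_parts + 1, 1)  # 24 body regions + background
        self.uv_head = nn.Conv2d(128, 2 * num_parts, 1)    # (u, v) regression per region

    def forward(self, amplitude, phase):
        # amplitude, phase: [batch, 3, H, W] placeholders (e.g., one channel per antenna)
        feat = self.encoder(torch.cat([amplitude, phase], dim=1))
        return self.part_head(feat), self.uv_head(feat)
```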
With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based solely on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and on improving ICL in future work.
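A minimal illustration of the ICL setup just defined: the prompt is built from a handful of labeled demonstrations plus the query, it is sent to a frozen LLM, and the prediction is read off the continuation; no parameters are updated. The template below is an arbitrary example, not a prescribed format.

```python
def build_icl_prompt(demonstrations, query, template="Review: {x}\nSentiment: {y}\n"):
    prompt = "".join(template.format(x=x, y=y) for x, y in demonstrations)
    return prompt + f"Review: {query}\nSentiment:"

demos = [("Great movie, loved it.", "positive"),
         ("Terrible plot and acting.", "negative")]
print(build_icl_prompt(demos, "The soundtrack was wonderful."))
# The generated continuation (e.g., "positive") is taken as the prediction.
```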
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network from several modules such as CNNs, LSTMs, and attention. Recent methods combine the Transformer with these modules for better performance. However, training a network composed of mixed modules requires tedious optimization skills, making these methods inconvenient to use in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming at providing off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one is used to process a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT can consistently achieve satisfactory performance in different settings.
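A condensed sketch of the cascaded design described above (not the reference TIT code): an inner Transformer encodes the tokens of a single observation, and an outer Transformer encodes the resulting sequence of observation embeddings over the history. Dimensions and the pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class TITSketch(nn.Module):
    def __init__(self, token_dim, d_model=128, nhead=4, inner_layers=2, outer_layers=2):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        make_layer = lambda: nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.inner = nn.TransformerEncoder(make_layer(), inner_layers)   # within one observation
        self.outer = nn.TransformerEncoder(make_layer(), outer_layers)   # across the history
        self.head = nn.Linear(d_model, 1)                                # e.g., value or action logit

    def forward(self, obs_history):
        # obs_history: [batch, history_len, num_tokens, token_dim]
        B, T, N, _ = obs_history.shape
        tokens = self.embed(obs_history).reshape(B * T, N, -1)
        obs_emb = self.inner(tokens).mean(dim=1).reshape(B, T, -1)  # one vector per observation
        hist_emb = self.outer(obs_emb)[:, -1]                       # representation of the latest step
        return self.head(hist_emb)
```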
Recently, deep learning has shown its advantages in representation learning and clustering for time series data. Despite the considerable progress, existing deep time series clustering approaches mostly seek to train the deep neural network with some instance-reconstruction-based or cluster-distribution-based objective, which, however, lacks the ability to exploit sample-wise (or augmentation-wise) contrastive information, or even higher-level (e.g., cluster-level) contrastiveness, for learning discriminative and clustering-friendly representations. In light of this, this paper presents a deep temporal contrastive clustering (DTCC) approach, which, for the first time to our knowledge, incorporates the contrastive learning paradigm into deep time series clustering research. Specifically, with two parallel views generated from the original time series and their augmentations, we utilize two identical auto-encoders to learn the corresponding representations and, in the meantime, perform cluster distribution learning by incorporating a k-means objective. Further, two levels of contrastive learning are simultaneously enforced to capture instance-level and cluster-level contrastive information, respectively. With the reconstruction loss of the auto-encoder, the cluster distribution loss, and the two levels of contrastive losses jointly optimized, the network is trained in a self-supervised manner and the clustering result can thereby be obtained. Experiments on a variety of time series datasets demonstrate the superiority of our DTCC approach over the state-of-the-art.
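A compact sketch of the two contrastive terms described above, in the NT-Xent style commonly used for instance-level and cluster-level contrast; the auto-encoder reconstruction loss and the k-means-style cluster distribution loss are omitted, and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrast matched rows of `a` and `b` against all other rows."""
    z = F.normalize(torch.cat([a, b], dim=0), dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                 # a row is never its own positive or negative
    n = a.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def dtcc_contrastive(z1, z2, q1, q2):
    # z1, z2: [batch, dim] instance representations from the two views
    # q1, q2: [batch, k]  soft cluster assignments from the two views
    instance_loss = nt_xent(z1, z2)
    cluster_loss = nt_xent(q1.t(), q2.t())            # contrast matching cluster columns across views
    return instance_loss + cluster_loss
```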